13 research outputs found

    Integration of a stereo vision system into an autonomous underwater vehicle for pipe manipulation tasks

    Underwater object detection and recognition using computer vision are challenging tasks due to the poor lighting conditions of submerged environments. For intervention missions requiring grasping and manipulation of submerged objects, a vision system must provide an Autonomous Underwater Vehicle (AUV) with object detection, localization, and tracking capabilities. In this paper, we describe the integration of a vision system in the MARIS intervention AUV and its configuration for detecting cylindrical pipes, a typical artifact of interest in underwater operations. Pipe edges are tracked using an alpha-beta filter to achieve robustness and return a reliable pose estimate even in the case of partial pipe visibility. Experiments in an outdoor water pool under different lighting conditions show that the adopted algorithmic approach detects target pipes and provides a sufficiently accurate estimate of their pose even when they become partially visible, thereby supporting the AUV in several successful pipe grasping operations.
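The abstract does not give implementation details of the tracker. As a rough illustration, a one-dimensional alpha-beta update for a single edge parameter might look like the following sketch; the gains and the constant-velocity model are assumptions, not the paper's values:

```python
def alpha_beta_step(x, v, z, dt, alpha=0.85, beta=0.005):
    """One alpha-beta filter update: predict, then correct with measurement z.

    x, v : current position and velocity estimates
    z    : new measurement (e.g. an edge-line parameter in image space)
    """
    x_pred = x + v * dt          # predict position (constant-velocity model)
    r = z - x_pred               # innovation (measurement residual)
    x_new = x_pred + alpha * r   # correct position
    v_new = v + (beta / dt) * r  # correct velocity
    return x_new, v_new

# During partial visibility the correction step can be skipped and the
# prediction alone propagated, which is how alpha-beta trackers typically
# bridge measurement dropouts.
x, v = 0.0, 0.0
for z in [1.0, 2.0, 3.0, 4.0]:   # steadily drifting edge measurement
    x, v = alpha_beta_step(x, v, z, dt=1.0)
```

With the small beta gain above, the estimate lags a fast-moving target; tuning alpha and beta trades responsiveness against noise rejection.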

    Underwater intervention robotics: An outline of the Italian national project Maris

    The Italian national project MARIS (Marine Robotics for Interventions) pursues the strategic objective of studying, developing, and integrating technologies and methodologies that enable the development of autonomous underwater robotic systems employable for intervention activities. Such activities are becoming progressively more common in the underwater offshore industry, in search-and-rescue operations, and in underwater scientific missions. Within this ambitious objective, the project consortium also intends to demonstrate the achievable operational capabilities at a proof-of-concept level by integrating the results into prototype experimental systems.

    Robust Feature-based LIDAR Localization and Mapping in Unstructured Environments

    In robotics, simultaneous localization and mapping (SLAM) is a fundamental capability for autonomous mobile robots. This thesis deals with the problem of mobile robot localization and mapping in human-made environments. The contribution of this work is an innovative SLAM method based on robot odometry and LIDAR features (both keypoints and descriptors). The presented method does not require an initial configuration of the environment and can therefore be adopted wherever the robot can operate. Feature-based approaches are a class of methods well studied in computer vision and 3D point cloud processing, but relatively new in 2D range sensing. The proposed LIDAR keypoint detector, named FALKO, together with two novel descriptors, BSC and CGH, has been designed to provide stability and repeatability in feature-based laser scan matching. FALKO achieves higher repeatability scores and extracts fewer ephemeral points than other state-of-the-art keypoint detectors. Moreover, the precision-recall curves of the proposed descriptors are consistent with the results achieved by descriptors from computer vision and laser scan data. This thesis also illustrates novel loop closure methods based on FALKO keypoints and a novel feature signature, named GLAROT, and compares their performance in both offline and online localization and mapping problems against state-of-the-art signature algorithms. Results show that the FALKO detector combined with the GLAROT signature and point-to-point association outperforms previously proposed approaches. The thesis further proposes a novel automatic calibration method that simultaneously computes the intrinsic and extrinsic parameters of a mobile robot compliant with the tricycle wheeled robot model, a common kinematic configuration of industrial AGVs. The calibration is performed by computing the parameters that best fit the input commands and the sensor egomotion estimated from the sensor measurements. Finally, an application of LIDAR feature-based localization for industrial AGVs, along with an evaluation of the automatic calibration method, is illustrated. Results show that the feature-based approach achieves performance comparable to artificial landmark localization and compliant with the requirements of reliable navigation. Both static and dynamic feature mapping have been tested, showing that a static map of FALKO features performs like its reflector counterpart.
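The calibration described in the thesis jointly fits the intrinsic and extrinsic tricycle parameters to the commanded motion and the sensor egomotion. As a minimal sketch of the underlying idea only (not the thesis method), a single odometry scale factor can be fit in closed form by least squares:

```python
def calibrate_scale(commanded, measured):
    """Least-squares scale factor k minimizing sum((measured - k*commanded)^2).

    This is a deliberately reduced, illustrative version: it fits only one
    scalar parameter relating commanded displacement to sensor-estimated
    displacement, whereas the full method estimates the complete tricycle
    intrinsic/extrinsic parameter set.
    """
    num = sum(c * m for c, m in zip(commanded, measured))
    den = sum(c * c for c in commanded)
    return num / den

# Example: sensor egomotion reports distances 2% longer than commanded,
# e.g. because the assumed wheel radius is slightly too small.
k = calibrate_scale([1.0, 2.0, 3.0], [1.02, 2.04, 3.06])
```

The same fit-the-residual pattern generalizes to the multi-parameter case, where a nonlinear least-squares solver replaces the closed-form quotient.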

    Efficient loop closure based on FALKO lidar features for online robot localization and mapping

    Keypoint feature detection from measurements enables efficient localization and map estimation through the compact representation and recognition of locations. The keypoint detector FALKO has been proposed to detect stable points in laser scans for localization and mapping tasks. In this paper, we present novel loop closure methods based on FALKO keypoints and compare their performance in online localization and mapping problems. The pose graph formulation is adopted, where each pose is associated with a local map of keypoints extracted from the corresponding laser scan. Loops in the graph are detected by matching local maps in two steps. First, candidate matching scans are selected by comparing scan signatures obtained from the keypoints of each scan. Second, the transformation between two scans is obtained by pairing and aligning the respective keypoint sets. Experiments with standard benchmark datasets assess the performance of FALKO and of the proposed loop closure algorithms in both offline and online localization and map estimation.
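The second matching step, pairing and aligning keypoint sets, reduces to a closed-form 2D rigid registration once correspondences are fixed. The following is a standard least-squares alignment sketch, not necessarily the paper's exact estimator:

```python
import math

def align_2d(src, dst):
    """Closed-form 2D rigid transform (theta, tx, ty) mapping the paired
    keypoints src onto dst in the least-squares sense.

    Once keypoints from two local maps are paired, the relative pose
    between the two scans follows in closed form from the centroids and
    the cross-correlation of the centered point sets.
    """
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    sxx = sxy = 0.0
    for (ax, ay), (bx, by) in zip(src, dst):
        ax -= csx; ay -= csy; bx -= cdx; by -= cdy
        sxx += ax * bx + ay * by      # cosine component
        sxy += ax * by - ay * bx      # sine component
    theta = math.atan2(sxy, sxx)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)    # translation after rotating the
    ty = cdy - (s * csx + c * csy)    # source centroid
    return theta, tx, ty

# Keypoints rotated by 90 degrees and shifted by (1, 0):
theta, tx, ty = align_2d([(0, 0), (1, 0), (0, 1)],
                         [(1, 0), (1, 1), (0, 0)])
```

In a loop closure pipeline the recovered transform is then added as a constraint edge between the two poses in the graph.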

    Investigation of vision-based underwater object detection with multiple datasets

    In this paper, we investigate the potential of vision-based object detection algorithms in underwater environments using several datasets to highlight the issues arising in different scenarios. Underwater computer vision has to cope with distortion and attenuation due to light propagation in water, and with challenging operating conditions. Scene segmentation and shape recognition in a single image must be carefully designed to achieve robust object detection and to facilitate object pose estimation. We describe a novel multi-feature object detection algorithm conceived to find human-made artefacts lying on the seabed. The proposed method searches for a target object according to a few general criteria that are robust to the underwater context, such as salient colour uniformity and sharp contours. We assess the performance of the proposed algorithm across several underwater datasets, which were acquired with stereo cameras of different quality and differ in target object type and colour, acquisition depth, and conditions. The effectiveness of the proposed approach has been experimentally demonstrated. Finally, object detection is discussed in connection with simple colour-based segmentation and with the difficulty of three-dimensional processing on noisy data.
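A multi-criteria detector of this kind scores candidate regions by combining feature cues. The toy scoring function below is purely illustrative: the weights, the scalar colour cue, and the normalization are assumptions, not the paper's actual features.

```python
def region_score(colors, contour_sharpness, w_color=0.5, w_contour=0.5):
    """Toy multi-criteria score for a candidate region.

    colors            : per-pixel scalar colour values sampled in the region
    contour_sharpness : mean gradient magnitude along the region boundary,
                        assumed pre-normalized to [0, 1]
    """
    n = len(colors)
    mean = sum(colors) / n
    var = sum((c - mean) ** 2 for c in colors) / n
    uniformity = 1.0 / (1.0 + var)   # 1.0 for a perfectly uniform region
    return w_color * uniformity + w_contour * contour_sharpness

# A uniform, sharply bounded region outranks a noisy, blurry one:
good = region_score([0.5, 0.5, 0.5], contour_sharpness=0.9)
bad = region_score([0.1, 0.9, 0.4], contour_sharpness=0.3)
```

Combining several weak but water-robust cues in this way is what lets the detector tolerate the colour shifts and blur of the underwater medium.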

    Computer vision in underwater environments: A multiscale graph segmentation approach

    In this paper, we propose a novel object detection algorithm for underwater environments exploiting multiscale graph-based segmentation. The graph-based approach to image segmentation is fairly independent of distortion, color alteration, and other peculiar effects arising from light propagation in the water medium. The algorithm is executed at different scales in order to capture both the contour and the general shape of the target cylindrical object. Next, the candidate regions extracted at the different scales are merged together. Finally, the candidate region is validated by a shape regularity test. The proposed algorithm has been compared with a color clustering method on an underwater dataset and has achieved precise and accurate detection.
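Merging candidate regions found at different scales can be sketched, for instance, as greedy grouping of overlapping bounding boxes; the overlap measure and threshold below are illustrative assumptions, not the paper's merging rule:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def merge_across_scales(regions, thresh=0.5):
    """Greedily merge overlapping candidate boxes from different scales
    into their bounding union (the threshold is illustrative)."""
    merged = []
    for r in regions:
        for i, m in enumerate(merged):
            if iou(r, m) >= thresh:
                merged[i] = (min(r[0], m[0]), min(r[1], m[1]),
                             max(r[2], m[2]), max(r[3], m[3]))
                break
        else:
            merged.append(r)
    return merged

# Two detections of the same pipe at different scales collapse to one:
out = merge_across_scales([(0, 0, 10, 4), (1, 0, 11, 4), (50, 50, 60, 54)])
```

The merged region would then be handed to the shape regularity test for final validation.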

    Issues in high performance vision systems design for underwater interventions

    This paper describes the design and evaluation of a vision system conceived to provide perception in Autonomous Underwater Vehicle (AUV) intervention tasks. Due to the accuracy requirements inherent in manipulation tasks, high-performance vision systems that enable adequate perception capabilities are needed for underwater interventions. However, vision systems are challenged by the difficulties and variability of underwater environments, as well as by the need to operate inside a sealed canister. The vision system described in this paper addresses design issues such as computational performance, power consumption, heat dissipation, and network capabilities. Even though the system has been designed to support stereovision, experiments in several underwater contexts have shown that stereovision is seldom applicable, due to the many problems light propagation faces in water. Developing a system for underwater operation highlights the need for tradeoffs between computational performance and power consumption and dissipation, as well as the need for flexibility to support multiple vision processing pipelines and adapt to the specific underwater context.